Today's activity was two-fold, with the focus on understanding the relationship between the MIR/MAR reallocation issue (and the consequent 1.8 Hz oscillation) and the CARM loop gain. The underlying idea was that the CARM loop is marginal at the higher end of its phase bubble, and that even the small gain change induced by switching the reallocation strategy between the HighPower and LowNoise configurations could push CARM into the instability region.
The first test we did was in the (currently) standard configuration during science mode, with the splitting filters misbalanced towards the MIR correction:
- 16:00:00 UTC: clean data, 240 s;
- 16:04:30 UTC: CARM loop gain increased by 5% (16 -> 16.8), 500 s;
- 16:14:30 UTC: CARM loop gain increased further, to +7.5% with respect to nominal (16.8 -> 17.2), 500 s;
- 16:25:40 UTC: CARM loop gain decreased to ~-20% with respect to nominal (17.2 -> 13), 600 s.
The test is reported in Figure 1, where the four traces (purple, red, black and blue, respectively) correspond to the four steps above: the structures around 1.8 Hz, already present in the standard configuration, got progressively worse as the gain was increased, and completely disappeared at low gain.
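For bookkeeping, all the gain steps above are quoted relative to the nominal gain of 16; a trivial Python check of the percentages:

```python
# Quick check of the relative CARM gain changes quoted above,
# all with respect to the nominal gain of 16.
NOMINAL_GAIN = 16.0
steps = {"+5%": 16.8, "+7.5%": 17.2, "~-20%": 13.0}

for label, gain in steps.items():
    change = (gain / NOMINAL_GAIN - 1.0) * 100.0  # exact: 13/16 - 1 = -18.75%
    print(f"{NOMINAL_GAIN:g} -> {gain:g}: {change:+.2f}% (quoted as {label})")
```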
We then tested restoring the MIR/MAR reallocation filters in the low-gain condition, but this still caused the oscillation to grow and kill the lock (Figure 2).
At this point, recalling that the current CARM gain is much higher (a factor of ~2) than the old one, mainly because of issues in the CARM loop engagement (which happens with a different filter at the end of LOCKING_CARM_NULL_1F), we wanted to test the old configuration for the lock acquisition:
- we kept the old reallocation filters, i.e. the ones used in the past and in the test of Figure 2;
- we reduced the CARM gain from 16 to 9 (the value used in the past); to avoid reintroducing the engagement issues, we added the CARM gain reduction at the beginning of ACQUIRE_LOW_NOISE_1.
The test was indeed successful, and we left this configuration in place.
We then performed some noise injections on CARM, to better understand the limits with respect to B1 saturations; we used the standard LSC_noise_MICHband filter with different amplitudes (Figure 3):
- 18:37:30 UTC: amplitude 5e-5, 120 s;
- 18:40:30 UTC: amplitude 8e-5, 200 s; here we observed one glitch in DARM (following a 'canonical' 25 min glitch);
- 18:45:25 UTC: amplitude 1e-4, 180 s; several glitches were observed;
- 18:48:30 UTC: amplitude 2e-4; glitches were now appearing consistently, also impacting the SR alignment; the injection was quickly stopped.
We need to understand how to reduce this glitchiness without losing effectiveness in the noise injection, which is not great to begin with; the filter shape is already at relatively high frequency, and moving it towards lower frequencies is tricky. Moreover, much of the B1_DC r.m.s. is accumulated at 5.2 Hz (DIFFp_TY line) and at low frequency in general. Given that the saturation level is around +-0.06, we are never more than a factor of 2-3 away from it.
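To illustrate the last point, the headroom of a correction signal with respect to the saturation threshold can be estimated as sketched below. This is synthetic data, not the actual B1 channel; the +-0.06 threshold is the one quoted above, and the signal amplitude is a placeholder:

```python
import numpy as np

# Sketch (synthetic data, not actual Virgo channels): estimate the headroom
# of a correction signal with respect to a saturation threshold.
rng = np.random.default_rng(0)
correction = 0.02 * rng.standard_normal(10_000)  # placeholder time series

SATURATION = 0.06  # approximate saturation level quoted in the text

peak = np.max(np.abs(correction))
rms = np.sqrt(np.mean(correction**2))
print(f"peak = {peak:.3f}, rms = {rms:.3f}, "
      f"headroom to saturation = {SATURATION / peak:.2f}x")
```

On real data the peak (or a high percentile of |correction|) during an injection, compared against the threshold, gives the factor quoted in the text.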
The other topic of the day was confined to the very last part of the shift, given the importance of the first one: we manually performed a noise injection on the BS_TY OpLev in order to verify the shape of the filter and the overall noise. The shape looks fine, and a noise injection at 10x the sensing noise level had no visible effect (18:55:30 UTC, 300 s, Figure 4; in purple, the noise is generated but not injected).
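The "no visible effect" statement can be checked quantitatively by comparing amplitude spectral densities with and without the injection. A minimal sketch on synthetic data (the sampling rate, duration, and noise levels are placeholders, not the actual OpLev parameters):

```python
import numpy as np
from scipy import signal

# Sketch (synthetic data): compare the ASD of a sensor channel with and
# without a noise injection, as in the BS_TY OpLev test.
fs = 500.0                      # Hz, placeholder sampling rate
n = int(300 * fs)               # 300 s stretch, as in the test
rng = np.random.default_rng(1)

reference = rng.standard_normal(n)                      # quiet-time data
injection = reference + 0.05 * rng.standard_normal(n)   # injection-time data

f, psd_ref = signal.welch(reference, fs=fs, nperseg=4096)
_, psd_inj = signal.welch(injection, fs=fs, nperseg=4096)

# The injection "has no visible effect" if the ASD ratio stays close to 1.
ratio = np.sqrt(psd_inj / psd_ref)
print(f"median ASD ratio: {np.median(ratio):.3f}")
```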
More analysis on both topics may follow.